Distributed Optimization of Multi-Beam Directional Communication Networks
We formulate an optimization problem for maximizing the data rate of a common
message broadcast from nodes within an airborne network to a central station
receiver while maintaining a set of intra-network rate demands.
Assuming that the network has full-duplex links with multi-beam directional
capability, we obtain a convex multi-commodity flow problem and use a
distributed augmented Lagrangian algorithm to solve for the optimal flows
associated with each beam in the network. For each augmented Lagrangian
iteration, we propose a scaled gradient projection method to minimize the local
Lagrangian function that incorporates the local topology of each node in the
network. Simulation results show fast convergence of the algorithm in
comparison to simple distributed primal dual methods and highlight performance
gains over standard minimum distance-based routing.
Comment: 6 pages, submitted
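The scaled gradient projection step can be illustrated on a toy problem. This is a sketch only, not the paper's algorithm: the quadratic objective, penalty parameter `rho`, multiplier `lam`, step size, and identity scaling below are invented for the example, whereas the paper's scaling matrix incorporates each node's local topology.

```python
import numpy as np

def scaled_gradient_projection(grad, project, x0, scale, steps=500, lr=0.05):
    """Minimize a smooth function by scaled gradient steps followed by
    projection back onto the feasible set (here, nonnegative flows)."""
    x = x0.copy()
    for _ in range(steps):
        x = project(x - lr * scale * grad(x))
    return x

# Toy local augmented Lagrangian:
#   0.5*||x - c||^2 + lam*(sum(x) - d) + (rho/2)*(sum(x) - d)^2
c = np.array([1.0, -2.0, 3.0])
rho, d, lam = 10.0, 2.0, 0.0

def grad(x):
    return (x - c) + (lam + rho * (x.sum() - d)) * np.ones_like(x)

project = lambda x: np.maximum(x, 0.0)  # flows must stay nonnegative
scale = np.ones(3)                      # placeholder for a topology-aware diagonal scaling
x_opt = scaled_gradient_projection(grad, project, np.zeros(3), scale)
```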
Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration
Deep neural networks achieve high prediction accuracy when the train and test
distributions coincide. In practice, however, various types of corruption
occur that deviate from this setup and cause severe performance degradation. Few
methods have been proposed to address generalization in the presence of
unforeseen domain shifts. In particular, digital noise corruptions arise
commonly in practice during the image acquisition stage and present a
significant challenge for current robustness approaches. In this paper, we
propose a diverse Gaussian noise consistency regularization method for
improving robustness of image classifiers under a variety of noise corruptions
while still maintaining high clean accuracy. We derive bounds to motivate and
understand the behavior of our Gaussian noise consistency regularization using
a local loss landscape analysis. We show that this simple approach improves
robustness against various unforeseen noise corruptions by 4.2-18.4\% over
adversarial training and other strong diverse data augmentation baselines
across several benchmarks. Furthermore, when combined with state-of-the-art
diverse data augmentation techniques, we empirically show our method further
improves robustness by 0.6-3\% and uncertainty calibration by 2.1-10.6\% for
common corruptions for several image classification benchmarks.
Comment: Under review. Preliminary version accepted to ICML 2021 Uncertainty & Robustness in Deep Learning Workshop
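A minimal sketch of the core regularizer follows. It is illustrative only: the toy linear-softmax "classifier", the noise scales, and the squared-difference consistency measure are assumptions for the example, not the paper's exact loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(predict, x, sigmas, rng):
    """Penalize disagreement between predictions on clean inputs and on
    Gaussian-perturbed inputs drawn at several noise scales (the 'diverse' part)."""
    p_clean = predict(x)
    loss = 0.0
    for s in sigmas:
        p_noisy = predict(x + rng.normal(0.0, s, size=x.shape))
        loss += np.mean((p_clean - p_noisy) ** 2)
    return loss / len(sigmas)

# Toy stand-in classifier: a random linear map followed by softmax
W = rng.normal(size=(4, 3))
predict = lambda x: softmax(x @ W)
x = rng.normal(size=(8, 4))
reg = consistency_loss(predict, x, sigmas=[0.1, 0.5, 1.0], rng=rng)
```

In training, a term like `reg` would be added to the usual cross-entropy loss on the clean inputs.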
Adaptive Low-Complexity Sequential Inference for Dirichlet Process Mixture Models
We develop a sequential low-complexity inference procedure for Dirichlet
process mixtures of Gaussians for online clustering and parameter estimation
when the number of clusters is unknown a priori. We present an easily
computable, closed form parametric expression for the conditional likelihood,
in which hyperparameters are recursively updated as a function of the streaming
data assuming conjugate priors. Motivated by large-sample asymptotics, we
propose a novel adaptive low-complexity design for the Dirichlet process
concentration parameter and show that the number of classes grows at most at a
logarithmic rate. We further prove that in the large-sample limit, the
conditional likelihood and data predictive distribution become asymptotically
Gaussian. We demonstrate through experiments on synthetic and real data sets
that our approach is superior to other online state-of-the-art methods.
Comment: 25 pages, To appear in Advances in Neural Information Processing Systems (NIPS) 201
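The sequential update can be caricatured in one dimension. This is a hypothetical simplification: known cluster variance `sigma2`, a normal prior `N(0, tau2)` on each cluster mean, and hard MAP assignments stand in for the paper's full conjugate model and adaptive concentration-parameter design.

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def sequential_dp_cluster(stream, alpha, sigma2=1.0, tau2=10.0):
    """One-pass MAP clustering for a DP mixture of 1-D Gaussians with known
    variance sigma2 and conjugate N(0, tau2) priors on the cluster means.
    Posterior hyperparameters are updated recursively as data stream in."""
    means, vars_, counts, labels = [], [], [], []
    n = 0
    for x in stream:
        # predictive likelihood of x under each existing cluster: N(mu_k, sigma2 + var_k)
        scores = [c / (alpha + n) * gauss(x, m, sigma2 + v)
                  for m, v, c in zip(means, vars_, counts)]
        # weight for opening a new cluster, times the prior predictive density
        scores.append(alpha / (alpha + n) * gauss(x, 0.0, sigma2 + tau2))
        k = int(np.argmax(scores))
        if k == len(means):                        # open a new cluster
            means.append(0.0); vars_.append(tau2); counts.append(0)
        # recursive conjugate update of the chosen cluster's posterior
        prec = 1.0 / vars_[k] + 1.0 / sigma2
        means[k] = (means[k] / vars_[k] + x / sigma2) / prec
        vars_[k] = 1.0 / prec
        counts[k] += 1
        labels.append(k)
        n += 1
    return labels, means

labels, means = sequential_dp_cluster([-5.0, -5.1, -4.9, 5.0, 5.1, 4.9], alpha=1.0)
```

On this well-separated toy stream the procedure opens exactly two clusters, one per mode.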
Kronecker Sum Decompositions of Space-Time Data
In this paper we consider the use of the space vs. time Kronecker product
decomposition in the estimation of covariance matrices for spatio-temporal
data. This decomposition imposes lower dimensional structure on the estimated
covariance matrix, thus reducing the number of samples required for estimation.
To allow a smooth tradeoff between the reduction in the number of parameters
(to reduce estimation variance) and the accuracy of the covariance
approximation (affecting estimation bias), we introduce a diagonally loaded
modification of the sum of Kronecker products representation [1]. We derive a
Cramér-Rao bound (CRB) on the minimum attainable mean squared predictor
coefficient estimation error for unbiased estimators of Kronecker structured
covariance matrices. We illustrate the accuracy of the diagonally loaded
Kronecker sum decomposition by applying it to video data of human activity.
Comment: 5 pages, 8 figures, accepted to CAMSAP 201
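One standard way to compute such a decomposition is the Pitsianis–Van Loan rearrangement, sketched below with an added diagonal loading term `tau`. This is a sketch of the general technique under simple assumptions, not necessarily the paper's exact estimator.

```python
import numpy as np

def kron_sum_approx(S, p, q, r=1, tau=0.1):
    """Approximate a (p*q) x (p*q) covariance S by a sum of r Kronecker
    products plus diagonal loading tau*I, via rearrangement + truncated SVD."""
    # Rearrange S so that a Kronecker product A (x) B maps to the
    # rank-1 matrix vec(A) vec(B)^T: row (i*p + j) holds vec of block (i, j).
    R = np.zeros((p * p, q * q))
    for i in range(p):
        for j in range(p):
            R[i * p + j] = S[i * q:(i + 1) * q, j * q:(j + 1) * q].reshape(-1)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Sigma = tau * np.eye(p * q)                    # diagonal loading
    for k in range(r):                             # top-r Kronecker terms
        A = (np.sqrt(s[k]) * U[:, k]).reshape(p, p)
        B = (np.sqrt(s[k]) * Vt[k]).reshape(q, q)
        Sigma += np.kron(A, B)
    return Sigma
```

With `tau = 0` and `r = 1`, an exactly Kronecker-structured covariance is recovered exactly; increasing `tau` trades approximation bias for a better-conditioned, lower-variance estimate, mirroring the bias/variance tradeoff discussed in the abstract.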